99 research outputs found

    Preliminary System Safety Analysis with Limited Markov Chain Generation

    No full text
    Markov chains are a powerful and versatile tool to calculate reliability indicators. However, their use is limited for two reasons: the exponential blow-up of the size of the model, and the difficulty of designing models. To overcome this second difficulty, a solution consists in generating the Markov chain automatically from a higher-level description, e.g. a stochastic Petri net or an AltaRica model. These higher-level models describe the Markov chain implicitly. In this article, we propose an algorithm to generate partial Markov chains. The idea is to accept a small loss of accuracy in order to limit the size of the generated chain. The cornerstone of this method is a Relevance Factor associated with each state of the chain. This factor enables the selection of the most representative states. We show, on an already published test case, that our method provides very accurate results while dramatically reducing the complexity of the assessment. It is worth noting that the proposed method can be used with different high-level modeling formalisms.
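
    A minimal sketch of the pruning idea described above, assuming hypothetical `successors` and `relevance` callbacks supplied by the high-level model; it is not the authors' actual algorithm, only an illustration of best-first state exploration bounded by a state budget.

```python
import heapq
from itertools import count

def generate_partial_chain(initial_state, successors, relevance, max_states):
    """Best-first exploration of an implicitly defined Markov chain.

    `successors(state)` is assumed to yield (next_state, rate) pairs and
    `relevance(state)` to return the relevance factor of a state; both
    would come from the high-level model (e.g. AltaRica).  Only the
    `max_states` most relevant states are expanded, so the generated
    chain is partial but bounded in size.
    """
    kept = {}                         # state -> list of (next_state, rate)
    tie = count()                     # tie-breaker so the heap never compares states
    frontier = [(-relevance(initial_state), next(tie), initial_state)]
    seen = {initial_state}
    while frontier and len(kept) < max_states:
        _, _, state = heapq.heappop(frontier)
        transitions = list(successors(state))
        kept[state] = transitions
        for nxt, _rate in transitions:
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-relevance(nxt), next(tie), nxt))
    # States left in the frontier are truncated; the probability mass that
    # flows into them bounds the accuracy loss of the partial chain.
    return kept
```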

    Model synchronization: a formal framework for the management of heterogeneous models

    Get PDF
    In this article, we present the conceptual foundations and implementation principles of model synchronization, a formal framework for the management of heterogeneous models. The proposed approach relies on S2ML (System Structure Modeling Language) as a pivot language. We show, by means of a case study, that model synchronization can be used to ensure consistency between system architecture models designed with Capella and safety models written in AltaRica 3.0.
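
    The comparison-by-abstraction idea can be illustrated with a small sketch. The pivot structure, attribute names, and consistency check below are hypothetical simplifications; the actual framework abstracts both models into S2ML rather than into Python sets.

```python
def abstract_structure(model):
    """Project a model onto a crude pivot structure: the set of its
    components together with their ports.  The `model.components`,
    `name`, and `ports` attributes are assumed here for illustration."""
    return {(c.name, frozenset(c.ports)) for c in model.components}

def check_consistency(architecture_model, safety_model):
    """Compare the two abstractions and report discrepancies, i.e.
    components declared in one model but missing from the other."""
    arch = abstract_structure(architecture_model)
    safe = abstract_structure(safety_model)
    return {
        "missing_in_safety_model": arch - safe,
        "missing_in_architecture_model": safe - arch,
    }
```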

    An attempt to understand complexity in a government digital transformation project

    Get PDF
    Digital transformation projects will become one of the dominant tools for mastering digital transformation in governments. Studies show that such projects are complex undertakings and increasingly difficult to manage. The purpose of the paper is to provide a better understanding of the factors that cause complexity in government digital transformation projects. The authors use an in-depth case study approach to investigate factors of complexity in an ongoing digital transformation project. The results indicate that complexity in this project is rooted in dynamic relationships between multiple dimensions of organization, technology, and innovation. The authors conclude that when organizational structuring, the introduction of new technology, and efforts to innovate and create added value for citizens and businesses operate in tandem, the pervasive complexity associated with delivering government digital transformation projects becomes increasingly difficult to manage.

    Can a Program Reverse-Engineer Itself?

    Get PDF
    Shape-memory alloys are metal pieces that remember their original cold-forged shapes and return to the pre-deformed shape after heating. In this work we construct a software analogue of shape-memory alloys: programs whose code resists obfuscation. We show how to pour arbitrary functions into protective envelopes that allow recovering the functions' exact initial code after obfuscation. We lay out the theoretical foundations of our method and provide a concrete implementation in Scheme.
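
    The paper's implementation is in Scheme; the toy sketch below, in Python, only illustrates the underlying idea of an envelope that keeps a protected copy of a function's initial source so it can be recovered later. The `protect` helper is a hypothetical stand-in, not the construction of the paper.

```python
import inspect

def protect(fn):
    """Wrap `fn` together with a copy of its own source text so that the
    exact initial code can be recovered even after the callable itself
    has been transformed.  A toy stand-in for the paper's protective
    envelopes, not their actual construction."""
    source = inspect.getsource(fn)

    class Envelope:
        def __call__(self, *args, **kwargs):
            return fn(*args, **kwargs)

        def recover(self):
            return source             # the pre-transformation code

    return Envelope()

def add(x, y):
    return x + y

protected = protect(add)
assert protected(2, 3) == 5
print(protected.recover())            # prints the original definition of add
```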

    A Large-Scale Genetic Analysis Reveals a Strong Contribution of the HLA Class II Region to Giant Cell Arteritis Susceptibility

    Get PDF
    We conducted a large-scale genetic analysis on giant cell arteritis (GCA), a polygenic immune-mediated vasculitis. A case-control cohort, comprising 1,651 case subjects with GCA and 15,306 unrelated control subjects from six different countries of European ancestry, was genotyped by the Immunochip array. We also imputed HLA data with a previously validated imputation method to perform a more comprehensive analysis of this genomic region. The strongest association signals were observed in the HLA region, with rs477515 representing the highest peak (p = 4.05 × 10^−40, OR = 1.73). A multivariate model including class II amino acids of HLA-DRβ1 and HLA-DQα1 and one class I amino acid of HLA-B explained most of the HLA association with GCA, consistent with previously reported associations of classical HLA alleles such as HLA-DRB1∗04. An omnibus test on polymorphic amino acid positions highlighted HLA-DRβ1 position 13 (p = 4.08 × 10^−43) and HLA-DQα1 positions 47 (p = 4.02 × 10^−46), 56, and 76 (the latter two both p = 1.84 × 10^−45) as relevant positions for disease susceptibility. Outside the HLA region, the most significant loci included PTPN22 (rs2476601, p = 1.73 × 10^−6, OR = 1.38), LRRC32 (rs10160518, p = 4.39 × 10^−6, OR = 1.20), and REL (rs115674477, p = 1.10 × 10^−5, OR = 1.63). Our study provides evidence of a strong contribution of HLA class I and II molecules to susceptibility to GCA. In the non-HLA region, we confirmed a key role for the functional PTPN22 rs2476601 variant and proposed other putative risk loci for GCA involved in Th1, Th17, and Treg cell function.

    Notes on Computational Uncertainties in Probabilistic Risk/Safety Assessment

    No full text
    In this article, we study computational uncertainties in probabilistic risk/safety assessment resulting from the computational complexity of the calculation of risk indicators. We argue that the risk analyst faces the fundamental epistemic and aleatory uncertainties of risk assessment with a bounded calculation capacity, and that this bounded capacity over-determines both the design of models and the decisions that can be made from them. We sketch a taxonomy of modelling technologies and recall the main computational complexity results. Then, based on a review of state-of-the-art assessment algorithms for fault trees and event trees, we make some methodological proposals aimed at drawing the conceptual and practical consequences of bounded calculability.
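
    The tension between exact and bounded computation can be made concrete on fault trees. The sketch below contrasts an exact inclusion-exclusion over minimal cutsets, whose cost grows exponentially with the number of cutsets, with the rare-event approximation commonly used in practice; the data and function names are illustrative only and not taken from the article.

```python
from itertools import combinations

def cutset_prob(events, p):
    """Probability that all basic events in a cutset occur, assuming
    independent basic events with probabilities `p`."""
    prob = 1.0
    for e in events:
        prob *= p[e]
    return prob

def top_event_exact(cutsets, p):
    """Exact probability of the union of the minimal cutsets via
    inclusion-exclusion: 2^len(cutsets) terms, hence intractable for
    large models."""
    total = 0.0
    for k in range(1, len(cutsets) + 1):
        sign = 1.0 if k % 2 else -1.0
        for subset in combinations(cutsets, k):
            total += sign * cutset_prob(set().union(*subset), p)
    return total

def top_event_rare_event(cutsets, p):
    """Rare-event approximation: linear in the number of cutsets and
    slightly conservative when probabilities are small."""
    return sum(cutset_prob(c, p) for c in cutsets)

p = {"A": 1e-3, "B": 2e-3, "C": 5e-4}
cutsets = [{"A", "B"}, {"A", "C"}]
print(top_event_exact(cutsets, p), top_event_rare_event(cutsets, p))
```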

    Decision diagram algorithms to extract minimal cutsets of finite degradation models

    No full text
    In this article, we propose decision diagram algorithms to extract minimal cutsets of finite degradation models. Finite degradation models generalize and unify the combinatorial models used to support probabilistic risk, reliability and safety analyses (fault trees, attack trees, reliability block diagrams…). They formalize a key idea underlying all risk assessment methods: the states of the model represent levels of degradation of the system under study. Although these states cannot be totally ordered, they have a rich algebraic structure that can be exploited to extract the minimal cutsets of a model, which represent the most relevant failure scenarios. The notion of minimal cutsets we introduce here generalizes the one defined for fault trees. We show how the algorithms used to calculate minimal cutsets can be lifted to finite degradation models, thanks to a generic decomposition theorem and an extension of binary decision diagram technology. We discuss implementation and performance issues. Finally, we illustrate the interest of the proposed technology by means of a use case stemming from the oil and gas industry.
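
    As a point of reference, the classical notion of minimal cutsets on Boolean fault trees can be computed by the brute-force sketch below; the decision diagram algorithms of the article replace this enumeration and extend the notion to multi-state (finite degradation) variables. The tree encoding and names are illustrative assumptions.

```python
from itertools import product

def minimize(sets):
    """Keep only minimal cutsets: drop any cutset that strictly
    contains another one."""
    sets = set(sets)
    return [s for s in sets if not any(t < s for t in sets)]

def cutsets(node):
    """Cutsets of a fault-tree node encoded as nested tuples
    ('and', child, ...) / ('or', child, ...) or a basic-event name."""
    if isinstance(node, str):                     # basic event
        return [frozenset([node])]
    op, *children = node
    child_sets = [cutsets(c) for c in children]
    if op == "or":
        result = [cs for sets in child_sets for cs in sets]
    elif op == "and":
        result = [frozenset().union(*combo) for combo in product(*child_sets)]
    else:
        raise ValueError(f"unknown gate {op!r}")
    return minimize(result)

tree = ("or", ("and", "A", "B"), ("and", "A", "B", "C"))
print(cutsets(tree))                              # the only minimal cutset is {A, B}
```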